
    AI, democracy, and the importance of asking the right questions

    Democracy is widely praised as a great achievement of humanity. However, in recent years there has been increasing concern that its functioning across the world may be eroding, and efforts to combat such change are emerging in response. Considering the pervasiveness of technology and its ever-growing capabilities, it is no surprise that much attention has turned to the use of artificial intelligence (AI) to this end. Questions as to how AI can best be utilized to extend the reach of democracy to currently non-democratic countries, or to increase the involvement of certain demographic groups (e.g. ethnic minorities, women, and young people) in the democratic process, are frequent topics of discussion. In this article I do not merely question whether this is desirable, but rather argue that we should be trying to envisage ways of using AI for the exact opposite purpose: that of replacing democratic systems with better alternatives.

    On the value of life

    That life has value is a tenet eliciting all but universal agreement, be it amongst philosophers, policy-makers, or the general public. Yet when it comes to its employment in practice, especially in the context of policies which require the balancing of different moral choices—for example in health care, foreign aid, or animal rights related decisions—it takes little for cracks to appear and for disagreement to arise as to what the value of life actually means and how it should guide our actions in the real world. I argue that in no small part this state of affairs is a consequence of the infirmity of the foundations upon which the claim respecting the value of life supervenes once its theological underpinnings are abandoned. Hence, I depart radically from contemporary thought and argue that life has no inherent value. Far from lowering the portcullis to Pandemonium, the abandonment of the quasi-Platonistic claim that life has intrinsic value, when understood and applied correctly, leads to a comprehensive, consistent, and compassionate ethical framework for understanding the related problems. I illustrate this using several hotly debated topics, including speciesism, and show how the ideas I introduce help us to interpret people’s choices and to resolve outstanding challenges which present an insurmountable obstacle to the existing ethical theories.

    Principled and data efficient support vector machine training using the minimum description length principle, with application in breast cancer

    Support vector machines (SVMs) are established as highly successful classifiers in a broad range of applications, including numerous medical ones. Nevertheless, their current employment is restricted by a limitation in the manner in which they are trained, most often the training-validation-test or k-fold cross-validation approaches, which are wasteful both in terms of the use of the available data and of computational resources. This is a particularly important consideration in many medical problems, in which data availability is low (be it because of the inherent difficulty in obtaining sufficient data, or because of practical obstacles, e.g. pertaining to privacy and data sharing). In this paper we propose a novel approach to training SVMs which does not suffer from the aforementioned limitation and which is at the same time much more rigorous in nature, being built upon solid information-theoretic grounds. Specifically, we show how the training process, that is the process of hyperparameter inference, can be formulated as a search for the optimal model under the minimum description length (MDL) criterion, allowing for theory- rather than empiricism-driven selection and removing the need for validation data. The effectiveness and superiority of our approach are demonstrated on the Wisconsin Diagnostic Breast Cancer Data Set.
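
    A minimal sketch of what MDL-driven hyperparameter selection can look like for an SVM is given below. The paper’s exact coding scheme is not reproduced here; the crude two-part code used (a fixed per-support-vector model cost plus the negative log-likelihood of the training labels under Platt-scaled outputs) is an assumption made purely for illustration. Note that, in keeping with the idea above, no validation split is required.

```python
# A sketch of MDL-style SVM hyperparameter selection (not the paper's
# exact formulation): score each candidate (C, gamma) by a crude
# two-part code length L(model) + L(data | model), computed on the
# training data alone -- no validation split is needed.
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)

def description_length(model, X, y):
    # Data cost: negative log-likelihood (in bits) of the labels under
    # the probabilistic (Platt-scaled) outputs of the fitted SVM.
    p = np.clip(model.predict_proba(X)[np.arange(len(y)), y], 1e-12, 1.0)
    data_bits = -np.log2(p).sum()
    # Model cost: a crude proxy charging a fixed number of bits per
    # support vector (an assumption made for illustration only).
    model_bits = 32.0 * model.n_support_.sum()
    return data_bits + model_bits

best = min(
    ((C, g) for C in (0.1, 1.0, 10.0) for g in (0.001, 0.01, 0.1)),
    key=lambda cg: description_length(
        SVC(C=cg[0], gamma=cg[1], probability=True).fit(X, y), X, y
    ),
)
print("selected (C, gamma):", best)
```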

    Whole slide image understanding in pathology : what is the salient scale of analysis?

    Background: In recent years, there has been increasing research into applications of artificial intelligence in medicine. Digital pathology has seen great success in introducing technology into the digitisation and analysis of pathology slides to ease the burden of work on pathologists. Digitised pathology slides, otherwise known as whole slide images, can be analysed by pathologists with the same methods used to analyse traditional glass slides. Methods: The digitisation of pathology slides has also made it possible to use these whole slide images to train machine learning models to detect tumours. Patch-based methods are common in the analysis of whole slide images, as these images are too large to be processed whole by standard machine learning methods. However, there is little work exploring the effect that the size of the patches has on the analysis. A patch-based whole slide image analysis method was implemented and then used to evaluate and compare the accuracy of the analysis using patches of different sizes. In addition, two different patch sampling methods were used to test whether the optimal patch size is the same for both, alongside a downsampling method in which low-resolution versions of the whole slide images are used to train an analysis model. Results: The most successful method uses a patch size of 256 × 256 pixels with the informed sampling method, which uses the location of tumour regions to sample a balanced dataset. Conclusion: Future work on patch-based analysis of whole slide images in pathology should take these findings into account when designing new models.
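
    The two sampling regimes compared above lend themselves to a short illustration. The following is a hypothetical sketch, not the paper’s implementation: patches of a configurable size are drawn from a whole slide image via OpenSlide, either uniformly at random or alternating between annotated tumour regions and background so that the resulting set is balanced.

```python
# Hypothetical sketch of random vs. "informed" patch sampling from a
# whole slide image, using OpenSlide for WSI access.
import random
import openslide

def sample_patches(slide_path, patch_size, n, tumour_boxes=None):
    """Yield (x, y, patch) tuples of patch_size x patch_size pixels.

    If tumour_boxes (a list of (x, y, w, h) level-0 rectangles, assumed
    to come from pathologist annotations) is given, the sampler
    alternates between tumour and background patches, approximating the
    informed, balanced sampling described above.
    """
    slide = openslide.OpenSlide(slide_path)
    W, H = slide.dimensions

    def inside(x, y):
        return any(bx <= x < bx + bw and by <= y < by + bh
                   for bx, by, bw, bh in (tumour_boxes or []))

    drawn = 0
    while drawn < n:
        x = random.randrange(0, W - patch_size)
        y = random.randrange(0, H - patch_size)
        # Informed sampling: even draws must hit tumour, odd draws must not.
        if tumour_boxes is not None and inside(x, y) != (drawn % 2 == 0):
            continue
        patch = slide.read_region((x, y), 0, (patch_size, patch_size))
        yield x, y, patch.convert("RGB")
        drawn += 1
```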

    A whole-slide is greater than the sum of its...patches

    Muscle-invasive bladder cancer (MIBC) is a common form of cancer which can necessitate complex treatment decisions. Different methods involving machine learning have been developed with the goal of making MIBC diagnosis better and more specific, and thus limiting the amount of invasive testing needed for MIBC patients. A particularly fruitful direction of research involves the use of tissue images and the application of deep learning. In order to deal with extremely large whole slide images (WSIs), state-of-the-art methods approach the problem using a patch-based convolutional neural network which takes small patches (often 256 × 256 pixels) of WSIs as input and provides a classification of cancerous or non-cancerous as output. Patch-to-slide classification is then often achieved by classifying a WSI as cancerous if and only if the majority of its patches are classified as cancerous. In this work we compare different approaches to the integration of local, patch-based decisions as a means of arriving at a robust global, WSI-based classification. Our results suggest that an absolute, positive patch count based decision-making, with an appropriately learnt threshold, achieves the best results.
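
    The winning aggregation rule is simple enough to state in a few lines of code. Below is a minimal sketch, with made-up inputs: the threshold on the absolute number of positive patches is learnt from labelled training slides, then applied to classify new slides.

```python
# Minimal sketch of count-based patch aggregation: a WSI is classified
# as cancerous when its number of positive patches reaches a threshold
# learnt from labelled slides (majority voting is the common baseline).
import numpy as np

def learn_count_threshold(patch_preds_per_slide, slide_labels):
    """Pick the positive-patch count that best separates the classes."""
    counts = np.array([sum(p) for p in patch_preds_per_slide])
    labels = np.array(slide_labels)
    candidates = np.unique(counts)
    accs = [((counts >= t) == labels).mean() for t in candidates]
    return candidates[int(np.argmax(accs))]

def classify_slide(patch_preds, threshold):
    return int(sum(patch_preds) >= threshold)

# Toy usage with made-up per-slide binary patch predictions:
train = [[1, 0, 0, 1, 1], [0, 0, 1, 0, 0], [1, 1, 1, 0, 1]]
t = learn_count_threshold(train, [1, 0, 1])
print(classify_slide([1, 1, 0, 0, 0], t))
```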

    Determining chess game state from an image

    Identifying the configuration of chess pieces from an image of a chessboard is a problem in computer vision that has not yet been solved accurately. However, it is important for helping amateur chess players improve their games by facilitating automatic computer analysis without the overhead of manually entering the pieces. Current approaches are limited by the lack of large datasets and are not designed to adapt to unseen chess sets. This paper puts forth a new dataset, synthesised from a 3D model, that is an order of magnitude larger than existing ones. Trained on this dataset, a novel end-to-end chess recognition system is presented that combines traditional computer vision techniques with deep learning. It localises the chessboard using a RANSAC-based algorithm that computes a projective transformation of the board onto a regular grid. Using two convolutional neural networks, it then predicts an occupancy mask for the squares in the warped image and finally classifies the pieces. The described system achieves an error rate of 0.23% per square on the test set, 28 times better than the current state of the art. Further, a few-shot transfer learning approach is developed that is able to adapt the inference system to a previously unseen chess set using just two photos of the starting position, obtaining a per-square accuracy of 99.83% on images of that new chess set. The code, dataset, and trained models are made available online.
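
    The geometric step described above can be illustrated briefly. In the sketch below, the detection of the four board corners (which the paper obtains via its RANSAC-based stage) is assumed to have been done already; what is shown is only the projective warp onto a regular grid and the cropping of the 64 squares fed to the two networks.

```python
# Sketch of the board-warping step: compute the projective transform
# mapping the four detected board corners onto a regular grid, then
# crop the warped image into its 64 squares.
import cv2
import numpy as np

def warp_board(image, corners, square_px=64):
    """corners: 4x2 array of board corners in the order
    top-left, top-right, bottom-right, bottom-left."""
    side = 8 * square_px
    dst = np.float32([[0, 0], [side, 0], [side, side], [0, side]])
    H = cv2.getPerspectiveTransform(np.float32(corners), dst)
    return cv2.warpPerspective(image, H, (side, side))

def squares(warped, square_px=64):
    """Crop the warped board into an 8x8 grid of square images."""
    return [[warped[r*square_px:(r+1)*square_px,
                    c*square_px:(c+1)*square_px]
             for c in range(8)] for r in range(8)]
```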

    How good is the science that informs government policy? A lesson from the U.K.’s response to 2020 CoV-2 outbreak

    In an era when public faith in politicians is dwindling, yet trust in scientists remains relatively high, governments are increasingly emphasizing the role of science-based policy-making in response to challenges such as climate change and global pandemics. In this paper we question the quality of some of the scientific advice given to governments, as well as the robustness and transparency of the entire framework which envelops such advice, all of which raise serious ethical concerns. In particular we focus on the so-called Imperial Model, which heavily influenced the government of the United Kingdom in devising its response to the COVID-19 crisis. We highlight several fundamental methodological flaws of the model, raise concerns as to the robustness of the system which permitted these to remain unchallenged, and discuss the relevant ethical consequences.

    A Siamese transformer network for zero-shot ancient coin classification

    Ancient numismatics, the study of ancient coins, has in recent years become an attractive domain for the application of computer vision and machine learning. Though rich in research problems, the predominant focus in this area to date has been on the task of attributing a coin from an image, that is, of identifying its issue. This may be considered the cardinal problem in the field, and it continues to challenge automatic methods. In the present paper, we address a number of limitations of previous work. Firstly, the existing methods approach the problem as a classification task. As such, they are unable to deal with classes with no or few exemplars (which would be most, given that there are over 50,000 issues of Roman Imperial coins alone), and they require retraining when exemplars of a new class become available. Hence, rather than seeking to learn a representation that distinguishes a particular class from all others, herein we seek a representation that is overall best at distinguishing classes from one another, thus relinquishing the demand for exemplars of any specific class. This leads to our adoption of the paradigm of pairwise coin matching by issue, rather than the usual classification paradigm, and to the specific solution we propose in the form of a Siamese neural network. Furthermore, while adopting deep learning, motivated by its successes in the field and its unchallenged superiority over classical computer vision approaches, we also seek to leverage the advantages that transformers have over the previously employed convolutional neural networks, and in particular their non-local attention mechanisms, which ought to be particularly useful in ancient coin analysis by associating distal elements of a coin’s design that are semantically but not visually related. Evaluated on a large data corpus of 14,820 images and 7,605 issues, using transfer learning and only a small training set of 542 images of 24 issues, our Double Siamese ViT model is shown to surpass the state of the art by a large margin, achieving an overall accuracy of 81%. Moreover, our further investigation of the results shows that the majority of the method’s errors are unrelated to the intrinsic aspects of the algorithm itself, and are rather a consequence of unclean data, a problem that can easily be addressed in practice by simple pre-processing and quality checking.
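
    To make the pairwise-matching paradigm concrete, here is a simplified Siamese sketch in PyTorch. The paper’s Double Siamese ViT architecture is not reproduced; the encoder is left abstract, and the contrastive loss shown is a standard choice rather than necessarily the one used in the paper.

```python
# Simplified Siamese matching: one shared encoder embeds both images,
# and the distance between embeddings decides whether two coins belong
# to the same issue. Trained with a standard contrastive loss.
import torch.nn as nn
import torch.nn.functional as F

class Siamese(nn.Module):
    def __init__(self, encoder):
        super().__init__()
        self.encoder = encoder  # any image encoder, e.g. a ViT backbone

    def forward(self, a, b):
        # Shared weights: the same encoder embeds both inputs.
        za = F.normalize(self.encoder(a), dim=-1)
        zb = F.normalize(self.encoder(b), dim=-1)
        return (za - zb).pow(2).sum(dim=-1)  # squared distance per pair

def contrastive_loss(sq_dist, same, margin=1.0):
    # Pull same-issue pairs together; push different issues apart until
    # their distance exceeds the margin.
    return (same * sq_dist
            + (1 - same) * F.relu(margin - sq_dist.sqrt()).pow(2)).mean()
```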

    Automated methods for tuberculosis detection/diagnosis : a literature review

    Funding: Wellcome Trust Institutional Strategic Support Fund of the University of St Andrews, grant code 204821/Z/16/Z. Tuberculosis (TB) is one of the leading infectious causes of death worldwide. The effective management and public health control of this disease depend on early detection and careful treatment monitoring. For many years, the microscopy-based analysis of sputum smears has been the most common method to detect and quantify Mycobacterium tuberculosis (Mtb) bacteria. Nonetheless, this form of analysis is a challenging procedure, since sputum examination can only be reliably performed by trained personnel with rigorous quality control systems in place, and it is affected by subjective judgement. Furthermore, although fluorescence-based sample staining methods have made the procedure easier in recent years, the microscopic examination of sputum remains a time-consuming operation. Over the past two decades, attempts have been made to automate this practice. Most approaches have focused on establishing an automated method of diagnosis, while others have centred on measuring the bacterial load or on detecting and localising Mtb cells for further research on the phenotypic characteristics of their morphology. The literature has incorporated machine learning (ML) and computer vision approaches as part of the methodology to achieve these goals. In this review, we first gathered publicly available TB sputum smear microscopy image sets and analysed the disparities among these datasets. Thereafter, we analysed the most common evaluation metrics used to assess the efficacy of each method in its particular field. Finally, we generated comprehensive summaries of prior work on ML and deep learning (DL) methods for automated TB detection, including a review of their limitations.
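
    As an illustration of the kind of evaluation metrics such methods commonly report (the review itself catalogues which metrics each reviewed work uses; the helper below is generic, not drawn from the review):

```python
# Generic helper for metrics commonly reported in automated TB
# detection work, computed from confusion-matrix counts.
def detection_metrics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)   # recall: fraction of Mtb-positive cases found
    specificity = tn / (tn + fp)   # fraction of negatives correctly rejected
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "f1": f1}

print(detection_metrics(tp=90, fp=10, fn=5, tn=895))
```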

    Whole slide images and patches of clear cell renal cell carcinoma tissue sections counterstained with Hoechst 33342, CD3, and CD8 using multiple immunofluorescence

    Funding: G.W. is supported by Lothian NHS. This project received funding from the European Union’s Horizon 2020 research and innovation programme under Grant Agreement No. 101017453 as part of the KATY project. This work is supported in part by the Industrial Centre for AI Research in Digital Diagnostics (iCAIRD), which is funded by Innovate UK on behalf of UK Research and Innovation (UKRI) (project number 104690). In recent years, there has been an increased effort to digitise whole-slide images of cancer tissue. This effort has opened up a range of new avenues for the application of deep learning in oncology. One such avenue is virtual staining, where a deep learning model is tasked with reproducing the appearance of stained tissue sections, conditioned on a different, often less expensive, input stain. However, data with which to train such models in a supervised manner, where the input and output stains are aligned on the same tissue sections, are scarce. In this work, we introduce a dataset of ten whole-slide images of clear cell renal cell carcinoma tissue sections counterstained with Hoechst 33342, CD3, and CD8 using multiple immunofluorescence. We also provide a set of over 600,000 patches of size 256 × 256 pixels extracted from these images, together with cell segmentation masks, in a format amenable to training deep learning models. It is our hope that this dataset will further the development of deep learning methods for digital pathology by serving as a resource for comparing and benchmarking virtual staining models.
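
    A minimal sketch of how the released patches and masks might be paired for training is given below. The directory layout and file naming used are assumptions for illustration only and should be checked against the dataset’s documentation.

```python
# Hypothetical PyTorch Dataset pairing the 256x256 patches with their
# cell segmentation masks; directory structure is assumed, not taken
# from the dataset's actual documentation.
from pathlib import Path
import numpy as np
from PIL import Image
from torch.utils.data import Dataset

class IFPatchDataset(Dataset):
    def __init__(self, root):
        root = Path(root)
        self.patches = sorted((root / "patches").glob("*.png"))
        self.masks = root / "masks"  # assumed parallel file naming

    def __len__(self):
        return len(self.patches)

    def __getitem__(self, i):
        patch = np.array(Image.open(self.patches[i]))
        mask = np.array(Image.open(self.masks / self.patches[i].name))
        return patch, mask
```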